Explainability has become a central requirement for the development, deployment, and adoption of machine learning (ML) models, yet we do not fully understand what explanation methods can and cannot do. Several factors, such as the data, the model prediction, the hyperparameters used in training, and random initialization, can all influence downstream explanations. While previous work empirically hinted that explanations (E) may have little relationship with the prediction (Y), there is no conclusive study quantifying this relationship. Our work borrows tools from causal inference to systematically assay it. More specifically, we measure the relationship between E and Y by measuring the treatment effect when intervening on their causal ancestors, i.e., the hyperparameters and inputs used to generate saliency-based Es or Ys. We discover that Y's relative direct influence on E follows an odd pattern: the influence is higher in the lowest-performing models than in mid-performing models, and it then decreases in the top-performing models. We believe our work is a promising first step towards providing better guidance for practitioners, who can make more informed decisions about utilizing these explanations by knowing what factors are at play and how they relate to their end task.
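To make the intervention idea concrete, here is a minimal sketch (our own illustration, not the paper's code) that treats a single hyperparameter change as the intervention and compares how strongly the predictions Y and gradient-based saliency explanations E respond to it; the model, data, and the choice of learning rate as the intervened hyperparameter are all assumptions for illustration.

```python
# Illustrative sketch: intervene on one hyperparameter (the learning rate) and
# measure the effect of that intervention on predictions (Y) vs. saliency maps (E).
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()

def train(lr):
    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(200):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return model

def saliency(model, x):
    x = x.clone().requires_grad_(True)
    model(x)[:, 1].sum().backward()          # gradient of the positive-class score
    return x.grad.abs()

m_a, m_b = train(lr=0.1), train(lr=0.01)     # the "treatment": changing the hyperparameter
dY = (m_a(X).softmax(-1) - m_b(X).softmax(-1)).abs().mean()
dE = (saliency(m_a, X) - saliency(m_b, X)).abs().mean()
print(f"effect on predictions: {dY.item():.4f}, effect on explanations: {dE.item():.4f}")
```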
Algorithmic recourse seeks to provide individuals with actionable recommendations for overcoming unfavorable outcomes made by automated decision-making systems. Recourse recommendations should ideally remain valid under reasonably small uncertainty in the features of the individual seeking recourse. In this work, we formulate the adversarially robust recourse problem and show that recourse methods offering minimally costly recourse fail to be robust. We then propose methods for generating adversarially robust recourse in the linear and in the differentiable case. To ensure that recourse is robust, individuals are asked to make more effort than they would otherwise have to. To shift part of the burden of robustness from the decision subject to the decision maker, we propose a model regularizer that encourages the additional cost of seeking robust recourse to be low. We show that classifiers trained with our proposed model regularizer, which penalizes relying on unactionable features for prediction, offer recourse that may require less effort.
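As a hedged illustration of the linear case (our own sketch, not the authors' algorithm): for a linear classifier, minimum-cost recourse moves a point just across the decision boundary, while a robust variant additionally crosses it with a margin large enough to absorb feature perturbations up to an assumed radius epsilon; the model weights and the l2 cost below are placeholders.

```python
# Sketch: minimum-cost vs. adversarially robust recourse for a linear classifier
# f(x) = sign(w.x + b), using the l2 cost ||x' - x||. All numbers are illustrative.
import numpy as np

w, b = np.array([2.0, -1.0]), -0.5           # hypothetical linear model
x = np.array([-1.0, 1.0])                     # individual currently rejected (w.x + b < 0)
eps = 0.3                                     # assumed bound on feature uncertainty

def recourse(x, margin=0.0):
    # Smallest l2 step that makes w.(x + d) + b >= margin.
    gap = margin - (w @ x + b)
    return x + max(gap, 0.0) * w / (w @ w)

x_min = recourse(x)                                     # just crosses the boundary: not robust
x_rob = recourse(x, margin=eps * np.linalg.norm(w))     # remains valid for any ||delta|| <= eps

print("minimal recourse cost:", np.linalg.norm(x_min - x))
print("robust recourse cost: ", np.linalg.norm(x_rob - x))
```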
With the growth of editing and sharing images over the internet, the importance of protecting image authorship has increased. Robust watermarking is a well-known approach to maintaining copyright protection. Robustness and imperceptibility are the two factors a watermarking scheme tries to maximize, and there is usually a trade-off between them: increasing robustness lessens the imperceptibility of the watermark. This paper proposes an adaptive method that determines the strength of the watermark embedding in different parts of the cover image according to its texture and brightness. Adaptive embedding increases robustness while preserving the quality of the watermarked image. Experimental results also show that the proposed method can effectively reconstruct the embedded payload under various common watermarking attacks, and it performs well compared to a recent technique.
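A minimal sketch of the adaptive idea follows; the embedding domain (spatial, additive) and the exact strength formula driven by block variance and mean brightness are our assumptions, not the paper's method.

```python
# Sketch: per-block embedding strength driven by local texture (std. dev.) and
# brightness (mean), with a simple additive spatial-domain embedding.
import numpy as np

def embed(cover, watermark_bits, block=8, base_alpha=2.0):
    cover = cover.astype(np.float64)
    out = cover.copy()
    h, w = cover.shape
    bits = iter(watermark_bits)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = cover[i:i + block, j:j + block]
            texture = patch.std() / 64.0          # rougher blocks tolerate stronger marks
            brightness = patch.mean() / 255.0     # brighter blocks tolerate stronger marks
            alpha = base_alpha * (1.0 + texture + brightness)
            try:
                bit = next(bits)
            except StopIteration:
                return np.clip(out, 0, 255)
            out[i:i + block, j:j + block] += alpha * (1 if bit else -1)
    return np.clip(out, 0, 255)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64))
marked = embed(cover, rng.integers(0, 2, size=64))
print("mean absolute change:", np.abs(marked - cover).mean())
```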
Recently, many attempts have been made to construct transformer-based U-shaped architectures, and new methods have been proposed that outperform CNN-based rivals. However, serious problems such as blockiness and cropped edges in the predicted masks remain because of the transformers' patch partitioning operations. In this work, we propose a new U-shaped architecture for medical image segmentation with the help of the newly introduced focal modulation mechanism. The proposed architecture has asymmetric depths for the encoder and decoder. Due to the ability of the focal module to aggregate local and global features, our model can simultaneously benefit from the wide receptive field of transformers and the local feature extraction of CNNs. This helps the proposed method balance local and global feature usage and outperform one of the most powerful transformer-based U-shaped models, Swin-UNet. We achieved a 1.68% higher DICE score and a 0.89 improvement in the HD metric on the Synapse dataset. Also, with extremely limited data, we had a 4.25% higher DICE score on the NeoPolyp dataset. Our implementations are available at: https://github.com/givkashi/Focal-UNet
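To give a sense of the mechanism, below is a simplified focal modulation block in PyTorch; it is a sketch based on the public description of focal modulation (depthwise convolutions at growing kernel sizes build multi-level context, gates aggregate it into a modulator, and the modulator multiplies a query projection), not the authors' Focal-UNet code, and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class FocalModulation(nn.Module):
    """Simplified focal modulation block (sketch, not the authors' implementation)."""
    def __init__(self, dim, focal_levels=2, kernel=3):
        super().__init__()
        self.f = nn.Linear(dim, 2 * dim + (focal_levels + 1))   # query, context, gates
        self.focal_convs = nn.ModuleList([
            nn.Conv2d(dim, dim, kernel_size=kernel + 2 * l,
                      padding=(kernel + 2 * l) // 2, groups=dim, bias=False)
            for l in range(focal_levels)
        ])
        self.proj = nn.Linear(dim, dim)
        self.act = nn.GELU()
        self.focal_levels = focal_levels

    def forward(self, x):                       # x: (B, H, W, C)
        dim = x.shape[-1]
        q, ctx, gates = torch.split(self.f(x), [dim, dim, self.focal_levels + 1], dim=-1)
        ctx = ctx.permute(0, 3, 1, 2)           # (B, C, H, W) for the convolutions
        modulator = 0
        for l, conv in enumerate(self.focal_convs):          # local-to-global context
            ctx = self.act(conv(ctx))
            modulator = modulator + ctx * gates[..., l].unsqueeze(1)
        global_ctx = ctx.mean(dim=(2, 3), keepdim=True)      # global pooling level
        modulator = modulator + global_ctx * gates[..., self.focal_levels].unsqueeze(1)
        out = q * modulator.permute(0, 2, 3, 1)               # modulate the query
        return self.proj(out)

x = torch.randn(1, 32, 32, 64)
print(FocalModulation(dim=64)(x).shape)         # torch.Size([1, 32, 32, 64])
```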
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
In classic reinforcement learning algorithms, agents make decisions at discrete and fixed time intervals. The physical duration between one decision and the next becomes a critical hyperparameter: when it is too short, the agent needs to make many decisions to achieve its goal, aggravating the problem's difficulty; when it is too long, the agent becomes incapable of controlling the system. Physical systems, however, do not need a constant control frequency, and for learning agents it is desirable to operate with low frequency when possible and high frequency when necessary. We propose a framework called Continuous-Time Continuous-Options (CTCO), in which the agent chooses options as sub-policies of variable durations. Such options are time-continuous and can interact with the system at any desired frequency, providing a smooth change of actions. The empirical analysis shows that our algorithm is competitive with other time-abstraction techniques, such as classic option learning and action repetition, and in practice overcomes the difficult choice of the decision frequency.
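A hedged sketch of the decision loop the abstract describes follows; the toy environment, the option parameterization (a target velocity plus a continuous duration), and the low-level controller are placeholders of our own, not the CTCO algorithm itself.

```python
# Sketch: an agent that chooses options of variable, continuous duration, while the
# low-level sub-policy interacts with the system at the simulator's fine timestep.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                       # fine simulation timestep of the physical system
state = np.zeros(2)             # toy 1-D point mass: [position, velocity]

def env_step(state, action, dt):
    pos, vel = state
    vel = vel + action * dt
    pos = pos + vel * dt
    return np.array([pos, vel]), -abs(pos - 1.0) * dt   # reward: stay near x = 1

def high_level_policy(state):
    # Placeholder: in CTCO this would be a learned policy over option parameters;
    # here we simply draw a target velocity and a duration at random.
    target_vel = rng.normal(loc=1.0 - state[0], scale=0.2)
    duration = rng.uniform(0.05, 0.5)                    # continuous option length
    return target_vel, duration

total_reward, t = 0.0, 0.0
while t < 5.0:
    target_vel, duration = high_level_policy(state)      # one decision ...
    t_end = t + duration
    while t < t_end:                                     # ... many fine-grained steps
        action = 5.0 * (target_vel - state[1])           # low-level sub-policy (P-control)
        state, r = env_step(state, action, dt)
        total_reward += r
        t += dt
print("return of the random-option policy:", round(total_reward, 3))
```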
We propose a principled way to define Gaussian process priors on various sets of unweighted graphs: directed or undirected, with or without loops. We endow each of these sets with a geometric structure, inducing the notions of closeness and symmetries, by turning them into a vertex set of an appropriate metagraph. Building on this, we describe the class of priors that respect this structure and are analogous to the Euclidean isotropic processes, like squared exponential or Mat\'ern. We propose an efficient computational technique for the ostensibly intractable problem of evaluating these priors' kernels, making such Gaussian processes usable within the usual toolboxes and downstream applications. We go further to consider sets of equivalence classes of unweighted graphs and define the appropriate versions of priors thereon. We prove a hardness result, showing that in this case, exact kernel computation cannot be performed efficiently. However, we propose a simple Monte Carlo approximation for handling moderately sized cases. Inspired by applications in chemistry, we illustrate the proposed techniques on a real molecular property prediction task in the small data regime.
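As a toy illustration of what a prior of this flavor can look like (our own simplification, not the paper's construction): if the metagraph on all simple undirected graphs over a fixed vertex set is taken to be the hypercube whose coordinates are the possible edges, the heat-kernel analogue of the squared exponential factorizes over edge slots and depends only on the Hamming distance between adjacency matrices.

```python
# Toy sketch (assumption ours): heat-kernel prior on the hypercube of edge
# indicators, so k(G1, G2) depends only on the Hamming distance between adjacencies.
import numpy as np

def edge_vector(adj):
    """Upper-triangular edge indicators of an undirected simple graph."""
    n = adj.shape[0]
    return adj[np.triu_indices(n, k=1)]

def heat_kernel(adj1, adj2, t=0.5):
    e1, e2 = edge_vector(adj1), edge_vector(adj2)
    agree = 0.5 * (1.0 + np.exp(-2.0 * t))    # per-coordinate heat kernel on {0, 1}
    differ = 0.5 * (1.0 - np.exp(-2.0 * t))
    hamming = int(np.sum(e1 != e2))
    return (agree ** (e1.size - hamming)) * (differ ** hamming)

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
print(heat_kernel(triangle, triangle), heat_kernel(triangle, path))
```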
Users of video conferencing systems experience poor quality when network conditions deteriorate because current video codecs simply cannot operate at extremely low bitrates. Recently, several neural alternatives have been proposed that reconstruct talking-head video at very low bitrates using a sparse representation of each frame, such as facial landmark information. However, these approaches produce poor reconstructions when there is significant movement or occlusion during a call, and they do not scale to higher resolutions. We design Gemino, a new neural compression system for video conferencing based on a novel high-frequency-conditional super-resolution pipeline. Gemino upsamples a very low-resolution version of each target frame while enhancing high-frequency details (e.g., skin texture, hair, etc.) based on information extracted from a single high-resolution reference image. We use a multi-scale architecture that runs different components of the model at different resolutions, allowing it to scale to resolutions comparable to 720p, and we personalize the model to learn person-specific details, achieving much better fidelity at low bitrates. We implement Gemino on top of aiortc, an open-source Python implementation of WebRTC, and show that it runs in real time on 1024x1024 video on an A100 GPU, at a lower bitrate than traditional video codecs for the same perceptual quality.
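A very rough sketch of the structure the abstract describes is given below; it is our own placeholder, not the Gemino architecture, and the layer sizes, fusion scheme, and residual refinement are all assumptions. The receiver upsamples a low-resolution target frame while injecting features extracted from a single high-resolution reference image.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReferenceConditionedUpsampler(nn.Module):
    """Placeholder sketch: upsample a low-res frame using features of a high-res reference."""
    def __init__(self, ch=32):
        super().__init__()
        self.ref_enc = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.fuse = nn.Sequential(nn.Conv2d(3 + ch, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, low_res_target, reference):
        up = F.interpolate(low_res_target, size=reference.shape[-2:],
                           mode="bilinear", align_corners=False)   # cheap upsampling
        ref_feats = self.ref_enc(reference)                        # high-frequency detail source
        return up + self.fuse(torch.cat([up, ref_feats], dim=1))   # residual refinement

ref = torch.randn(1, 3, 256, 256)                       # single high-res reference frame
low = torch.randn(1, 3, 64, 64)                         # very low-res target frame
print(ReferenceConditionedUpsampler()(low, ref).shape)  # torch.Size([1, 3, 256, 256])
```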
Road construction projects maintain the transportation infrastructure. These projects range from the short term (e.g., resurfacing or fixing potholes) to the long term (e.g., adding a shoulder or building a bridge). Traditionally, deciding on the next construction project and scheduling when it should be carried out is done through inspection by humans using special equipment. This approach is costly and difficult to scale. An alternative is to use computational approaches that integrate and analyze multiple types of past and present spatiotemporal data to predict the location and time of future road constructions. This paper reports on such an approach, one that uses a deep-neural-network-based model to predict future constructions. Our model applies convolutional and recurrent components to a heterogeneous dataset consisting of construction, weather, map, and road-network data. We also report on how we addressed the lack of adequate publicly available data by building a large dataset named "US Constructions", which includes 6.2 million cases of road construction, augmented with various spatiotemporal attributes and road-network features, collected in the contiguous United States (US) between 2016 and 2021. Using extensive experiments on several major cities in the US, we show the applicability of the work in accurately predicting future constructions, with an average F1 score of 0.85 and an accuracy of 82.2%, outperforming the baselines by 52.2%. In addition, we show how our training pipeline addresses the spatial sparsity of the data.
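A hedged sketch of combining convolutional and recurrent components over grid-structured spatiotemporal features follows; the input shapes, layer sizes, and per-cell prediction head are our assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class ConvRecurrentPredictor(nn.Module):
    """Sketch: per-timestep CNN over a spatial grid, LSTM over time, per-cell construction logit."""
    def __init__(self, in_feats=8, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(in_feats, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.rnn = nn.LSTM(input_size=16, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                                  # x: (batch, time, feats, H, W)
        b, t, c, h, w = x.shape
        feats = self.cnn(x.reshape(b * t, c, h, w))                  # (b*t, 16, H, W)
        feats = feats.reshape(b, t, 16, h * w).permute(0, 3, 1, 2)   # (b, H*W, t, 16)
        feats = feats.reshape(b * h * w, t, 16)                      # one sequence per grid cell
        out, _ = self.rnn(feats)
        logits = self.head(out[:, -1])                               # last timestep per cell
        return logits.reshape(b, h, w)                               # construction logit per cell

# weekly grids of construction / weather / map / road-network features (shapes assumed)
x = torch.randn(2, 12, 8, 16, 16)
print(ConvRecurrentPredictor()(x).shape)   # torch.Size([2, 16, 16])
```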
Image retargeting aims to alter the size of an image while preserving important content and minimizing noticeable distortions. However, previous image retargeting methods create outputs that suffer from artifacts and distortions. In addition, most previous works attempt to retarget the background and foreground of the input image simultaneously. Resizing the foreground and background at the same time changes the aspect ratios of the objects, and such changes are particularly undesirable for human subjects. We propose a retargeting method that overcomes these problems. The proposed method consists of the following steps. First, an inpainting method uses the input image and a binary mask of the foreground objects to generate a background image without any foreground objects. Second, a seam-carving method resizes the background image to the target size. Then, a super-resolution method increases the quality of the input image, and the foreground objects are extracted. Finally, the retargeted background and the extracted super-resolved objects are fed into a particle swarm optimization (PSO) algorithm. The PSO algorithm uses aesthetic quality assessment as its objective function to determine the best location and size for placing the objects in the background. We use image quality assessment and aesthetic quality assessment measures to show our superior results compared to popular image retargeting techniques.
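The placement step can be sketched as follows (our illustration only): PSO searches over the object's position and scale, scoring each candidate placement. The `aesthetic_score` function below is a hypothetical stand-in for the learned aesthetic quality assessor used as the objective.

```python
# Sketch: PSO over (x, y, scale) for placing a foreground object on the retargeted
# background; `aesthetic_score` is a placeholder for a learned aesthetic assessor.
import numpy as np

rng = np.random.default_rng(0)
bg_w, bg_h = 800, 600

def aesthetic_score(x, y, scale):
    # Placeholder objective: prefers a rule-of-thirds position and a moderate size.
    return -((x - bg_w / 3) ** 2 + (y - bg_h / 3) ** 2) / 1e5 - (scale - 1.0) ** 2

n_particles, dims = 20, 3
pos = rng.uniform([0, 0, 0.5], [bg_w, bg_h, 1.5], size=(n_particles, dims))
vel = np.zeros_like(pos)
p_best = pos.copy()
p_best_val = np.array([aesthetic_score(*p) for p in pos])
g_best = p_best[p_best_val.argmax()].copy()

for _ in range(100):
    r1, r2 = rng.random((n_particles, dims)), rng.random((n_particles, dims))
    vel = 0.7 * vel + 1.5 * r1 * (p_best - pos) + 1.5 * r2 * (g_best - pos)
    pos = pos + vel
    vals = np.array([aesthetic_score(*p) for p in pos])
    improved = vals > p_best_val
    p_best[improved], p_best_val[improved] = pos[improved], vals[improved]
    g_best = p_best[p_best_val.argmax()].copy()

print("best (x, y, scale):", np.round(g_best, 2))
```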